Stereoscopy and How it Works
Calling modern 3D “stereoscopic” is cheating a little, we must admit, as every attempt at 3D imagery, be it 1930s anaglyph or today’s high-tech alternatives, centres around viewing an object or image from two points. Your own vision, for example, is stereoscopic: you see the world from two separate points, and your brain amalgamates the two views into a 3D representation of the world around you.
It’s the reason you’re able to determine depth in objects (unless you’re a Cyclops, that is), and it’s what makes traditional video and even modern games occasionally frustrating, as your monitor lacks that crucial third dimension needed to judge distance accurately. Your brain can compensate to some degree by reading the relative size of a game’s surroundings, but the lack of real depth makes portraying scale notably difficult.
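To put a rough number on that, here’s a back-of-envelope sketch of the triangulation your brain is effectively doing: given the separation between your eyes and how far an object appears to shift between the two views, its distance falls out. The function name and figures below are illustrative assumptions, not measured values.

```python
# Classic stereo triangulation: distance = baseline * focal length / disparity.
# Baseline is the separation between the two viewpoints, disparity is how far
# the object shifts between them. Numbers here are rough, for illustration only.

def distance_from_disparity(baseline_m: float, focal_px: float, disparity_px: float) -> float:
    """Distance to an object from how far it shifts between two viewpoints."""
    return baseline_m * focal_px / disparity_px

# ~6.5 cm eye separation and a notional 1,000-pixel focal length:
print(distance_from_disparity(0.065, 1000, 26))  # 2.5 (metres away)
print(distance_from_disparity(0.065, 1000, 13))  # 5.0 (half the disparity, twice the distance)
```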
All the 3D technologies we’ve already discussed work in similar ways (albeit with differences in how the image is created), displaying two similar but distinctly different images of an object or scene, one to each eye.
The hype around 3D cinema is somewhat detached from the inherent dorkiness
The images are recorded from slightly different perspectives, using either dual cameras or a split lens, and both are then displayed simultaneously, with the iconic cardboard glasses (be they red and green or polarising) filtering out the image meant for the other eye. Each of your eyes therefore sees the scene from a slightly different angle, and your brain is tricked into interpreting the result as a three dimensional image.
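In a real-time renderer, the same trick amounts to placing two virtual cameras a small distance apart. Here’s a minimal sketch of what that might look like; the camera structure, the function and the 6.5 cm eye separation are our own illustrative assumptions rather than any particular engine’s code.

```python
# A minimal sketch of deriving a left/right eye pair from a single "centre"
# camera by sliding it along its own right-hand axis. Names and the default
# eye separation are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class Camera:
    position: tuple   # (x, y, z) in world space
    right: tuple      # unit vector pointing to the camera's right

def stereo_pair(centre: Camera, eye_separation: float = 0.065):
    """Return (left, right) cameras offset along the centre camera's right axis."""
    half = eye_separation / 2.0
    rx, ry, rz = centre.right
    cx, cy, cz = centre.position
    left = Camera((cx - rx * half, cy - ry * half, cz - rz * half), centre.right)
    right = Camera((cx + rx * half, cy + ry * half, cz + rz * half), centre.right)
    return left, right

# Example: a camera at the origin with +x as its right vector.
left_eye, right_eye = stereo_pair(Camera((0.0, 0.0, 0.0), (1.0, 0.0, 0.0)))
print(left_eye.position, right_eye.position)  # (-0.0325, 0, 0) and (0.0325, 0, 0)
```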
Sadly though, and as we’ve already mentioned, this only really works when the media in question was originally recorded in 3D; 2D footage “upscaled” into 3D struggles to look convincing. Unless some seriously sophisticated conversion technology comes along, it’ll be a long while before you’re watching your favourite classic movies in 3D.
However, the idea of 3D gaming is much more exciting. No matter the game, your system renders it in real time every time you play, reproducing the same scenes on demand, which means visual and post-processing effects such as anti-aliasing or DirectX 10 shaders can be added at a later date simply by rendering the game again with more advanced technology.
3D gaming has yet to take off, but the future is looking mighty tasty
This means that real time rendered 3D graphics are far more open to conversion to a three dimensional display than two dimensional media, which is exactly what Nvidia has been working towards with its GeForce 3D Vision, set to launch in the coming months.
By altering the way a game is rendered at the driver level, Nvidia has been able to convert almost every modern 3D game engine into producing a ready-made stereoscopic image. How it achieves this is a little more complex, though, and requires some explanation.
When a compatible game is rendered, a stereoscopic 3D addition to the standard ForceWare driver kicks in, rendering alternate frames from slightly different viewpoints. Using data from the game engine’s Z-buffer, the software builds a genuinely three dimensional representation of the scene; rather than simply duplicating and offsetting the image to create a 3D effect, as Zalman’s Trimon monitor did, it dynamically alters the depth, and thus the apparent distance, of in-game objects.
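As a very rough sketch of the principle (not Nvidia’s actual driver code), the depth stored in the Z-buffer can be turned into a per-pixel horizontal shift between the two eye frames: pixels at the screen’s convergence distance stay put, distant pixels shift one way and nearby pixels the other. The formula and parameter values below are illustrative assumptions.

```python
# Illustrative only: per-pixel parallax derived from Z-buffer depth, so nearer
# objects are shifted more between the left-eye and right-eye frames than
# distant ones. Separation and convergence values are made-up defaults.

def pixel_parallax(depth: float, separation: float = 0.03, convergence: float = 10.0) -> float:
    """Horizontal parallax (in normalised screen units) for a pixel at `depth`.

    Pixels at the convergence distance sit on the screen plane (zero shift),
    distant pixels shift towards `separation`, and pixels nearer than the
    convergence distance get a negative shift, so they appear to pop out.
    """
    return separation * (depth - convergence) / depth

for d in (2.0, 10.0, 100.0):
    print(f"depth {d:>6}: parallax {pixel_parallax(d):+.4f}")
# depth    2.0: parallax -0.1200
# depth   10.0: parallax +0.0000
# depth  100.0: parallax +0.0270
```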
It’s extremely ambitious to say the least, yet Nvidia already claims that the technology supports most modern titles, including almost every triple-A game released in the last three years, and is also pledging support for every new title of note.